Add support for LoRA #51
base: main
Conversation
/run-gaudi-tests
Force-pushed from 029856e to 9d92fb2
/run-gaudi-tests
Only codeowners can request to run Gaudi tests. Contact list: kzawora-intel, xuechendi, mswiniarsk, adobrzyn
Depends on vllm-project/vllm#21923 being merged.
Force-pushed from a9c2b75 to 385d861
/run-gaudi-tests
@vivekgoe, please add a CI trigger in either the Jenkins folder or tests/full_tests.
/run-gaudi-tests
@xuechendi I am not familiar with adding CI triggers. Is there an example you can share that I could follow?
@adobrzyn, what do you suggest for CI? Should we add it to the Jenkins folder: https://github.com/vllm-project/vllm-gaudi/blob/main/.jenkins/test_config.yaml
@xuechendi The LoRA-related tests in the vllm-fork test_config.yaml have not been migrated to the vllm-gaudi Jenkins config file. Also, the unit tests here are triggered differently, from https://github.com/HabanaAI/vllm-fork/blob/42c53f28da01c39fd4eeffbae79f1c8210c41d47/.jenkins/test_config.yaml#L172 . If I understand correctly, any new test added to the unit-tests directory is picked up automatically, which is happening, but the LoRA test is failing during model loading.
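For context on the Jenkins option being discussed: a test stage in a test_config.yaml of this kind is typically declared roughly as below. This is a hedged sketch only; the stage name, flavor value, and command script are hypothetical, and the exact schema should be checked against the linked config file.

```yaml
# Hypothetical sketch of a LoRA test stage entry; all names here are
# illustrative, not taken from the actual vllm-gaudi config.
stages:
  - name: test_lora
    steps:
      - name: multilora_g3_tp1
        flavor: g3
        command: cd .jenkins && bash run_lora_tests.sh   # hypothetical script
```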
Please resolve conflicts with main.
@adobrzyn @michalkuligowski I will rebase this tomorrow and will then need your help to trigger CI again.
@adobrzyn @michalkuligowski I have rebased the PR and checked that the 2 new LoRA unit tests pass locally. Please retrigger CI and help review the changes.
/run-gaudi-tests
Squashed commits:
- Remove dependency on LoRA worker class; first working version with simple example
- Fixed BS>1 case
- Fix in platform.py to avoid error due to missing vllm_config
- Fix no-LoRA case
- Fix warmup with LoRA
- Minor cleanup; disable HPU Graphs
- Clean-up, minor fixes
- Add LoRA unit-test
- Move LoRA configuration code to separate function
- Add MultiLoRA test
- Fix mypy error
- Update hpu_lora to use patching
- Fix for model load error in CI

Signed-off-by: Vivek <[email protected]>
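The "Update hpu_lora to use patching" commit, together with "Remove dependency on LoRA worker class", suggests the plugin overrides upstream LoRA behavior by rebinding methods at import time rather than subclassing a worker. A minimal sketch of that general pattern follows; every class and function name here is hypothetical, not the actual vllm or vllm-gaudi API.

```python
# Hypothetical sketch of the "patching" approach: instead of maintaining a
# device-specific subclass, the plugin replaces selected methods on the
# upstream class at import time. All names are illustrative.

class UpstreamLoRALayer:
    """Stand-in for an upstream LoRA layer implementation."""

    def forward(self, x):
        # Placeholder upstream behavior.
        return x + 1


def hpu_forward(self, x):
    # Device-specific override installed via patching.
    return (x + 1) * 2


def apply_hpu_patches():
    # Rebind the method on the class itself, so every existing and
    # future instance picks up the HPU-specific implementation.
    UpstreamLoRALayer.forward = hpu_forward


apply_hpu_patches()
layer = UpstreamLoRALayer()
print(layer.forward(1))  # prints 4
```

The advantage over subclassing is that upstream code paths that instantiate the original class directly still get the patched behavior, at the cost of tighter coupling to upstream internals.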
Signed-off-by: Vivek <[email protected]>
/run-gaudi-tests
Signed-off-by: Vivek <[email protected]>
/run-gaudi-tests
No description provided.